Integrating Refinement into Software Development Tools
Abstract: It is a challenge to provide automatic tool support for formal design by refinement transformations. In this paper, we bring this matter to the attention of the research community and discuss a component-based, model-transformational approach for integrating refinement into software development tools. Models, their consistency, and their correctness in an object-oriented and component-based development process are defined in rCOS, a refinement calculus recently developed at UNU-IIST. Correctness-preserving transformations between models are formalized and proved as refinement rules in rCOS. We then discuss how these transformations can be implemented in the Relations language of Query/View/Transformation (QVT), standardized by the OMG.
Search-based composed refactorings
Refactorings are commonly applied to source code to improve its structure and maintainability. Integrated development environments (IDEs) such as Eclipse or NetBeans offer refactoring support for various programming languages. Usually, the developer makes a particular selection in the source code, and chooses to apply one of the refactorings, which is then executed (with suitable pre-condition checks) by the IDE. Here, we study how we can reuse two existing refactorings to implement a more complex refactoring, and use heuristics to derive suitable input arguments for the new refactoring. We show that our combination of the Extract Method and Move Method refactorings can automatically improve the code quality on a large Java code base.
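The heuristic selection of input arguments for such a composed refactoring can be sketched as follows. This is a minimal illustration, not the authors' tool: it assumes a simple "feature envy" metric (a method referencing a foreign class more often than its own class is a candidate to be extracted and moved there), with hypothetical class and method names.

```python
# Sketch: heuristically propose Extract Method + Move Method targets.
# A method with "feature envy" (more references to another class than to
# its own) is proposed for extraction and relocation to that class.

def envy_score(method_refs, own_class):
    """Return (best_target, envy), envy = foreign refs - internal refs."""
    counts = {}
    for cls in method_refs:
        counts[cls] = counts.get(cls, 0) + 1
    internal = counts.get(own_class, 0)
    foreign = {c: n for c, n in counts.items() if c != own_class}
    if not foreign:
        return None, -internal
    target = max(foreign, key=foreign.get)  # most-referenced foreign class
    return target, foreign[target] - internal

def propose_refactorings(methods):
    """methods: {(class, method): [referenced classes]} -> proposed moves."""
    proposals = []
    for (cls, name), refs in methods.items():
        target, envy = envy_score(refs, cls)
        if target is not None and envy > 0:
            proposals.append((cls, name, target))
    return proposals

# Hypothetical code-base summary: printInvoice mostly touches Customer.
methods = {
    ("Order", "printInvoice"): ["Customer", "Customer", "Customer", "Order"],
    ("Order", "total"): ["Order", "Order"],
}
print(propose_refactorings(methods))  # -> [('Order', 'printInvoice', 'Customer')]
```

A real implementation would of course derive the reference counts from the program's AST and run the IDE's pre-condition checks before applying either refactoring.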
Saying Hello World with QVTR-XSLT - A Solution to the TTC 2011 Instructive Case
In this short paper we present our solution for the Hello World case study of
the Transformation Tool Contest (TTC) 2011 using the QVTR-XSLT tool. The tool
supports editing and execution of the graphical notation of the QVT Relations
language. The case study consists of a set of simple transformation tasks which
cover the basic functions required of a transformation language, such as
creating, reading/querying, updating, and deleting model elements. We design
a transformation for each of the tasks.
Comment: In Proceedings TTC 2011, arXiv:1111.440
Deadlock checking by a behavioral effect system for lock handling
Abstract: Deadlocks are a common error in programs with lock-based concurrency and are hard to avoid or even to detect. One way to prevent deadlocks is to statically analyze the program code to spot sources of potential deadlocks. Often, static approaches try to confirm that the lock-taking adheres to a given order or, better, to infer that such an order exists. Such an order precludes situations of cyclic waiting for each other's resources, which constitute a deadlock.

In contrast, we do not enforce or infer an explicit order on locks. Instead, we use a behavioral type and effect system that, in a first stage, checks the behavior of each thread or process against its declared behavior, which captures the potential interaction of the thread with the locks. In a second step, at the global level, the state space of the behavior is explored to detect potential deadlocks. We define a notion of deadlock-sensitive simulation to prove the soundness of the abstraction inherent in the behavioral description. Soundness of the effect system is proven by subject reduction, formulated such that it captures deadlock-sensitive simulation.

To render the state space finite, we show two further abstractions of the behavior sound: restricting the upper bound on re-entrant lock counters, and abstracting the (in general context-free) behavioral effect into a coarser, tail-recursive description. We prove our analysis sound using a simple concurrent calculus with re-entrant locks.
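The classical order-based analysis that the abstract contrasts with can be sketched concretely. The sketch below is an illustration of that baseline, not of the paper's effect system: it records, per thread trace, every edge "lock a held while acquiring lock b" and reports a potential deadlock when the resulting lock-order graph contains a cycle (i.e., cyclic waiting). Releases are omitted for brevity.

```python
# Sketch of lock-order-based deadlock detection: build a graph of
# "held-while-acquiring" edges from per-thread acquire sequences, then
# look for a cycle, which signals potential cyclic waiting.

from collections import defaultdict

def order_edges(traces):
    """traces: list of per-thread lock-acquire sequences."""
    edges = defaultdict(set)
    for trace in traces:
        held = []
        for lock in trace:
            for h in held:
                edges[h].add(lock)  # h was held while acquiring `lock`
            held.append(lock)
    return edges

def has_cycle(edges):
    """DFS cycle detection over the lock-order graph."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)
    def visit(n):
        color[n] = GRAY
        for m in edges[n]:
            if color[m] == GRAY or (color[m] == WHITE and visit(m)):
                return True
        color[n] = BLACK
        return False
    return any(color[n] == WHITE and visit(n) for n in list(edges))

# Thread 1 takes a then b; thread 2 takes b then a: cyclic waiting.
print(has_cycle(order_edges([["a", "b"], ["b", "a"]])))  # True
print(has_cycle(order_edges([["a", "b"], ["a", "b"]])))  # False
```

The paper's approach avoids committing to any such global order and instead explores the state space of abstracted thread behaviors.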
NP-Free: A Real-Time Normalization-free and Parameter-tuning-free Representation Approach for Open-ended Time Series
As more connected devices are implemented in a cyber-physical world and data
is expected to be collected and processed in real time, the ability to handle
time series data has become increasingly significant. To help analyze time
series in data mining applications, many time series representation approaches
have been proposed to convert a raw time series into another series for
representing the original time series. However, existing approaches are not
designed for open-ended time series (which is a sequence of data points being
continuously collected at a fixed interval without any length limit) because
these approaches need to know the total length of the target time series in
advance and pre-process the entire time series using normalization methods.
Furthermore, many representation approaches require users to configure and tune
some parameters beforehand in order to achieve satisfactory representation
results. In this paper, we propose NP-Free, a real-time Normalization-free and
Parameter-tuning-free representation approach for open-ended time series.
Without needing to use any normalization method or tune any parameter, NP-Free
can generate a representation for a raw time series on the fly by converting
each data point of the time series into a root-mean-square error (RMSE) value
based on Long Short-Term Memory (LSTM) and a Look-Back and Predict-Forward
strategy. To demonstrate the capability of NP-Free in representing time series,
we conducted several experiments based on real-world open-source time series
datasets. We also evaluated the time consumption of NP-Free in generating
representations.
Comment: 9 pages, 12 figures, 9 tables; this paper was accepted by the 2023
IEEE 47th Annual Computers, Software, and Applications Conference (COMPSAC
2023).
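The Look-Back and Predict-Forward idea described above can be sketched in a few lines. This is a deliberately simplified illustration, not NP-Free itself: a moving-average predictor stands in for the paper's LSTM, and the look-back length is a hypothetical parameter. The key property it shares with the description is that each point is converted on the fly into a single-point RMSE, with no prior normalization and no knowledge of the total series length.

```python
# Sketch: convert each incoming point of an open-ended series into the
# RMSE between a predict-forward estimate (here: the mean of a look-back
# window; stand-in for an LSTM) and the observed value.

import math
from collections import deque

def rmse_stream(series, look_back=3):
    window = deque(maxlen=look_back)  # look-back buffer of recent points
    out = []
    for x in series:
        if len(window) == look_back:
            pred = sum(window) / look_back          # predict forward
            out.append(math.sqrt((pred - x) ** 2))  # single-point RMSE
        window.append(x)                            # slide the window
    return out

rep = rmse_stream([1.0, 1.0, 1.0, 1.0, 5.0, 1.0], look_back=3)
print(rep)
```

An anomalous jump in the raw series (the 5.0 above) shows up as a large RMSE value in the representation, while steady segments map to values near zero.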
Solving the TTC 2011 Compiler Optimization Case with QVTR-XSLT
In this short paper we present our solution for the Compiler Optimization
case study of the Transformation Tool Contest (TTC) 2011 using the QVTR-XSLT
tool. The tool supports editing and execution of the graphical notation of QVT
Relations language.
Comment: In Proceedings TTC 2011, arXiv:1111.440
Stream-based dynamic data race detection
Detecting data races in modern code executing on multicore processors is challenging. Instrumentation-based techniques for race detection not only have a high performance impact, but are also unlikely to be certified for safety-critical systems. This paper presents a data race detector, based on the well-known lockset algorithm, written in the runtime verification language TeSSLa as a stream-based specification that uses dynamic data structures to record lock operations and memory accesses. Such a specification can then be instantiated with particular parameters to make it suitable for the more limited monitoring planned on field-programmable gate arrays.
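The underlying lockset algorithm can be sketched in a few lines. This is an illustration of the classical (Eraser-style) check the detector is based on, not of the TeSSLa specification itself, and it omits the usual state-machine refinements for initialization and read-sharing: each access intersects the variable's candidate lockset with the locks the accessing thread currently holds, and an empty candidate set flags a potential race.

```python
# Sketch of the basic lockset check: a variable's candidate lockset is
# the intersection of the locks held at every access; if it becomes
# empty, no single lock consistently protects the variable.

def lockset_check(events):
    """events: (thread, op, name) with op in {'acq', 'rel', 'access'}."""
    held = {}        # thread -> set of locks currently held
    candidates = {}  # variable -> candidate lockset
    races = set()
    for thread, op, name in events:
        locks = held.setdefault(thread, set())
        if op == "acq":
            locks.add(name)
        elif op == "rel":
            locks.discard(name)
        else:  # access to shared variable `name`
            if name not in candidates:
                candidates[name] = set(locks)   # first access: seed
            else:
                candidates[name] &= locks       # intersect with held locks
            if not candidates[name]:
                races.add(name)
    return races

# x is consistently protected by lock L; y is accessed with no lock held.
trace = [
    ("t1", "acq", "L"), ("t1", "access", "x"), ("t1", "rel", "L"),
    ("t2", "acq", "L"), ("t2", "access", "x"), ("t2", "rel", "L"),
    ("t2", "access", "y"),
]
print(lockset_check(trace))  # -> {'y'}
```

In the stream-based setting, the event trace arrives as input streams and the `held`/`candidates` maps correspond to the dynamic data structures the specification maintains.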
TTSS'11 - 5th International Workshop on Harnessing Theories for Tool Support in Software
The aim of the workshop is to bring together practitioners and researchers from academia, industry and government to present and discuss ideas about:
• How to deal with the complexity of software projects by multi-view modeling and separation of concerns about the design of functionality, interaction, concurrency, scheduling, and non-functional requirements;
• How to ensure correctness and dependability of software by integrating formal methods and tools for modeling, design, verification, and validation into design and development processes and environments; and
• Case studies and experience reports about harnessing static-analysis tools such as model checking, theorem proving, and testing, as well as runtime monitoring.